52 per cent of ChatGPT's answers to software engineering questions were incorrect, study finds
New York: OpenAI's ChatGPT answered about 52 per cent of software engineering questions incorrectly, according to a study, raising questions about the popular language model's accuracy.
Despite ChatGPT's popularity, there had not been a thorough investigation into the quality and usability of its responses to software engineering queries, said researchers from Purdue University in the US.
To address this gap, the team undertook a comprehensive analysis of ChatGPT's replies to 517 questions from Stack Overflow (SO).
"Our examination revealed that 52 per cent of ChatGPT's answers contain inaccuracies and 77 per cent are verbose," the researchers wrote in the paper, which has not been peer-reviewed and was published on a pre-print server.
Importantly, the team found that 54 per cent of the time, the errors stemmed from ChatGPT not understanding the concept behind the questions.
Even when it could understand the question, it failed to show an understanding of how to solve the problem, contributing to a high number of conceptual errors, they said.
Further, the researchers observed ChatGPT's limited ability to reason.
"In many cases, we saw ChatGPT give a solution, code, or formula without foresight or thinking about the outcome," they said.
"Prompt engineering and human-in-the-loop fine-tuning can be helpful in probing ChatGPT to understand a problem to some extent, but they are still insufficient when it comes to injecting reasoning into the LLM. Hence it is essential to understand the factors behind conceptual errors as well as fix the errors originating from the limitation of reasoning," they added.
Moreover, ChatGPT also suffers from other quality issues such as verbosity and inconsistency. Results of the in-depth manual analysis pointed to a large number of conceptual and logical errors in ChatGPT's answers. The linguistic analysis showed that ChatGPT's answers are very formal and rarely convey negative sentiment.
Nevertheless, users still preferred ChatGPT's responses 39.34 per cent of the time, owing to their comprehensiveness and articulate language style.
"These findings underscore the need for meticulous error correction in ChatGPT while also raising awareness among users about the potential risks associated with seemingly accurate answers," the researchers said.